Search Results

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 4
  • Pages: 105-119
Measures:
  • Citations: 0
  • Views: 428
  • Downloads: 268
Abstract: 

In this paper, a new approach is presented to fit a robust fuzzy regression model based on some fuzzy quantities. In this approach, we first introduce a new distance between two fuzzy numbers using the kernel function, and then, based on the least squares method, the parameters of the fuzzy regression model are estimated. The proposed approach performs well in producing a robust fuzzy model in the presence of different types of outliers. Using simulated and real data sets, the application of the proposed approach to modeling characteristics with outliers is studied.
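
As an illustration of a kernel-induced distance between fuzzy numbers, the sketch below discretizes two triangular fuzzy numbers by their alpha-cut endpoints and applies the standard kernel-induced distance d(A, B) = sqrt(K(A, A) - 2K(A, B) + K(B, B)) with a Gaussian kernel. The alpha-cut representation, the Gaussian kernel, and its width are illustrative assumptions; the paper's exact distance is not reproduced here.

```python
import numpy as np

def alpha_cut_vector(left, right, alphas):
    """Stack the lower/upper endpoints of a fuzzy number's alpha-cuts into one vector."""
    return np.concatenate([left(alphas), right(alphas)])

def kernel_distance(a_vec, b_vec, sigma=1.0):
    """Kernel-induced distance d(A,B) = sqrt(K(A,A) - 2K(A,B) + K(B,B)), Gaussian kernel."""
    k = lambda u, v: np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))
    return np.sqrt(k(a_vec, a_vec) - 2 * k(a_vec, b_vec) + k(b_vec, b_vec))

# Example: two triangular fuzzy numbers A = (1, 2, 3) and B = (2, 3, 5)
alphas = np.linspace(0, 1, 11)
A = alpha_cut_vector(lambda a: 1 + a, lambda a: 3 - a, alphas)
B = alpha_cut_vector(lambda a: 2 + a, lambda a: 5 - 2 * a, alphas)
print(kernel_distance(A, B))
```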

Issue Info:
  • Year: 2023
  • Volume: 34
  • Issue: 4
  • Pages: 349-361
Measures:
  • Citations: 0
  • Views: 32
  • Downloads: 0
Abstract: 

Kernel estimation of the cumulative distribution function (CDF), when the support of the data is bounded, suffers from bias at the boundaries. To solve this problem, we introduce a new estimator for the CDF with support (0,1) based on the beta kernel function. By studying the asymptotic properties of the proposed estimator, we show that it is consistent and free from boundary bias. We conducted an extensive simulation to illustrate the performance of the proposed estimator. The results demonstrate the superiority of the proposed estimator over other commonly used estimators. As an application, we use the estimated CDF for nonparametric simulation. Using a numerical study, we show that kernel probability density function (PDF) estimation can be noticeably improved when a large sample simulated from the estimated CDF is employed. We also use the proposed estimator to estimate the CDF of household health costs in Iran in 2019.
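
The sketch below shows one common way to build a boundary-respecting smoothed CDF on (0,1) with beta kernels: each observation contributes the CDF of a Beta(x_i/b + 1, (1 - x_i)/b + 1) distribution, and the estimate is their average. The parameterization and the bandwidth b are illustrative assumptions, not necessarily the estimator proposed in the paper.

```python
import numpy as np
from scipy.stats import beta

def beta_kernel_cdf(x, data, b=0.05):
    """Smoothed CDF estimate on (0, 1): average of Beta CDFs, one per observation.

    Observation x_i contributes the CDF of Beta(x_i/b + 1, (1 - x_i)/b + 1), so the
    estimate stays supported on [0, 1] (illustrative parameterization and bandwidth).
    """
    x = np.atleast_1d(x)
    a_par = data / b + 1.0           # first shape parameter per observation
    b_par = (1.0 - data) / b + 1.0   # second shape parameter per observation
    # evaluate every component Beta CDF at every x and average over observations
    return beta.cdf(x[:, None], a_par[None, :], b_par[None, :]).mean(axis=1)

# toy example: data simulated from Beta(2, 5)
rng = np.random.default_rng(0)
sample = rng.beta(2, 5, size=500)
grid = np.linspace(0.01, 0.99, 5)
print(beta_kernel_cdf(grid, sample))
```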

Author(s): KAYRI M. | ZIRHLIOGLU G.

Issue Info:
  • Year: 2009
  • Volume: 2
  • Issue: 1
  • Pages: 49-54
Measures:
  • Citations: 1
  • Views: 200
  • Downloads: 0
Keywords: 
Abstract: 

Author(s): JABARI NOUGHABI H.

Issue Info:
  • Year: 2009
  • Volume: 6
  • Issue: 2
  • Pages: 243-255
Measures:
  • Citations: 0
  • Views: 885
  • Downloads: 162
Abstract: 

Let {X_n, n ≥ 1} be a strictly stationary sequence of negatively associated random variables with common distribution function F. In this paper, we consider the estimation of the two-dimensional distribution function of (X_1, X_{k+1}) for fixed k ∈ N based on kernel-type estimators. We establish asymptotic normality and derive properties of the moments, from which we obtain the optimal bandwidth convergence rate, which is of order n^(-1). Besides some usual conditions on the kernel function, the conditions typically impose a convenient increase rate on the covariances cov(X_1, X_n).
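
A minimal sketch of a kernel-type estimator of the two-dimensional distribution function of (X_1, X_{k+1}): lagged pairs are formed from one realization of the series and each pair is smoothed with an integrated Gaussian kernel. The kernel choice and bandwidth are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from scipy.stats import norm

def bivariate_kernel_cdf(x, y, series, k=1, h=0.5):
    """Kernel-type estimate of F(x, y) = P(X_1 <= x, X_{k+1} <= y) from one realization.

    Lagged pairs (X_i, X_{i+k}) are formed from the stationary series; each pair is
    smoothed with the integrated Gaussian kernel (an illustrative kernel and bandwidth).
    """
    u, v = series[:-k], series[k:]          # lagged pairs (X_i, X_{i+k})
    return np.mean(norm.cdf((x - u) / h) * norm.cdf((y - v) / h))

# toy example on a weakly dependent series
rng = np.random.default_rng(1)
noise = rng.normal(size=300)
series = 0.5 * noise + 0.5 * np.roll(noise, 1)   # crude dependence for illustration
print(bivariate_kernel_cdf(0.0, 0.0, series, k=2, h=0.4))
```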

Issue Info:
  • Year: 2014
  • Volume: 8
Measures:
  • Views: 164
  • Downloads: 93
Abstract: 

Soft sensors can be used instead of hardware analyzers for direct measurement, since hardware analyzers are usually expensive and difficult to maintain. Moreover, when hardware sensors are not available, soft sensors are key technologies for producing high-quality products. Support vector regression (SVR) is an efficient machine learning technique that can be used for soft sensor design in chemical processes. One of the most important factors in the forecasting performance of SVR is the kernel function. In this study, an SVR model is evaluated as a predictor of reactor product quality in the HDS process with four different kernels, namely the linear, polynomial, sigmoid, and radial basis function (RBF) kernels. A genetic algorithm (GA) was also used to optimize the model parameters. The results show that the RBF kernel produces the best performance in forecasting the sulfur content. The proposed method provides a robust tool to predict the sulfur content of the HDS product over a wide range of sulfur contents with good accuracy.
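
A minimal sketch of the kernel comparison described above, using scikit-learn's SVR on synthetic data that stands in for the HDS process measurements; the hyperparameters are fixed for brevity rather than tuned with a genetic algorithm as in the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for process measurements (the study uses HDS reactor data).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 4))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Compare the four kernel choices mentioned in the abstract; C and epsilon are
# fixed here for brevity (the study optimizes model parameters with a GA).
for kernel in ["linear", "poly", "sigmoid", "rbf"]:
    model = SVR(kernel=kernel, C=10.0, epsilon=0.01).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{kernel:8s} test MSE = {mse:.4f}")
```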

Issue Info:
  • Year: 2020
  • Volume: 54
  • Issue: 2
  • Pages: 123-128
Measures:
  • Citations: 0
  • Views: 264
  • Downloads: 95
Abstract: 

In this paper, a new method called adaptive bandwidth in the kernel function has been used for two-dimensional upscaling of reservoir data. The bandwidth in the kernel can be considered a variability parameter in porous media. Given that the variability of reservoir characteristics depends on the complexity of the system, either in terms of geological structure or the specific feature distribution, variations must be treated differently when upscaling from a fine model to a coarse one. The upscaling algorithm introduced in this paper is based on the kernel function bandwidth, combined with the A* search algorithm and the depth-first search algorithm. In this algorithm, each cell can be merged with its adjacent cells within its x and y neighborhoods, using the optimal bandwidth obtained in the two directions. The upscaling process is performed on synthetic data with 30×30 grid dimensions and on the SPE-10 model as real data. Four modes are used for the starting point of the upscaling, and the process is performed according to the desired pattern; in each case, the upscaling error and the number of final upscaled blocks are obtained. Based on the number of coarsened cells as well as the upscaling error, the first pattern is selected as the optimal pattern for the synthetic data and the second pattern as the optimal simulation model for the real data. In this model, the numbers of cells were 236 and 3600, and the upscaling errors for the synthetic and real data were 0.4183 and 12.2, respectively. The upscaling results on the real data were compared with the normalization method and showed that the upscaling error of the normalization method was 15 times that of the kernel bandwidth algorithm.
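
A much simplified stand-in for the bandwidth-controlled coarsening idea: cells are grouped with their x/y neighbours by depth-first search whenever their property values differ by less than the bandwidth, so smooth regions collapse into large blocks while variable regions stay fine. The greedy flood-fill, the single global bandwidth, and the synthetic property field are assumptions for illustration; the paper's A*-based scheme with direction-dependent optimal bandwidths is not reproduced.

```python
import numpy as np

def upscale_by_bandwidth(grid, h):
    """Greedy illustration of bandwidth-controlled coarsening.

    Cells are grouped with their 4-neighbours (x and y directions) via depth-first
    search whenever their property values differ from the seed by less than h.
    """
    ny, nx = grid.shape
    labels = -np.ones((ny, nx), dtype=int)
    block = 0
    for j in range(ny):
        for i in range(nx):
            if labels[j, i] >= 0:
                continue
            stack, seed = [(j, i)], grid[j, i]
            while stack:
                y, x = stack.pop()
                if 0 <= y < ny and 0 <= x < nx and labels[y, x] < 0 \
                        and abs(grid[y, x] - seed) < h:
                    labels[y, x] = block
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
            block += 1
    return labels, block  # block count = number of coarse cells

rng = np.random.default_rng(0)
prop = rng.normal(size=(30, 30)).cumsum(axis=0)  # synthetic 30x30 property field
labels, n_blocks = upscale_by_bandwidth(prop, h=1.5)
print("coarse blocks:", n_blocks)
```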

Issue Info:
  • Year: 2019
  • Volume: 10
  • Issue: 3
  • Pages: 613-621
Measures:
  • Citations: 0
  • Views: 523
  • Downloads: 110
Abstract: 

Upscaling based on the bandwidth of the kernel function is a flexible approach because cells are coarsened according to their variability. The intensity of the coarsening in this method can be controlled with the bandwidth. In a region of smooth variability, a large number of cells will be merged; conversely, cells will remain fine where variability is severe. Bandwidth variation can therefore strongly affect the upscaling results, so determining the optimal bandwidth in this method is essential. For each bandwidth, the upscaled model has a certain number of upscaled blocks and an upscaling error. Obviously, higher thresholds or bandwidths yield fewer upscaled blocks and a higher sum of squared errors (SSE). On the other hand, with the smallest bandwidth, the upscaled model remains at a fine scale and there is practically no upscaling. In this work, different approaches are used to determine the optimal bandwidth or threshold for upscaling, including investigating changes in SSE, the intersection of two curves (the SSE curve and the number-of-upscaled-blocks curve), and the variation of SSE values versus bandwidth. In this particular case, if the goal of upscaling is to minimize the upscaling error, the intersection method gives a better result. Conversely, if the purpose of upscaling is to reduce computational cost, the SSE-variation approach is more appropriate for setting the threshold.
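
A sketch of the threshold-setting idea: sweep a range of bandwidths, record the number of coarse blocks and the SSE of each upscaled model, and pick the bandwidth where the two normalized curves intersect. The normalization and the crossing criterion are illustrative assumptions; `coarsen` stands for any coarsening routine, such as the sketch given for the previous entry.

```python
import numpy as np

def sweep_bandwidths(grid, coarsen, bandwidths):
    """For each bandwidth, record the number of coarse blocks and the SSE between
    the original cells and the block means of the upscaled model.  `coarsen` is any
    routine returning (labels, n_blocks)."""
    n_blocks, sse = [], []
    for h in bandwidths:
        labels, nb = coarsen(grid, h)
        means = np.array([grid[labels == b].mean() for b in range(nb)])
        sse.append(float(((grid - means[labels]) ** 2).sum()))
        n_blocks.append(nb)
    return np.array(n_blocks), np.array(sse)

def intersection_bandwidth(bandwidths, n_blocks, sse):
    """Pick the bandwidth where the normalized SSE curve and the normalized
    block-count curve cross (one of the selection rules described above)."""
    nb_n = (n_blocks - n_blocks.min()) / (np.ptp(n_blocks) or 1)
    sse_n = (sse - sse.min()) / (np.ptp(sse) or 1)
    return bandwidths[np.argmin(np.abs(nb_n - sse_n))]
```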

Author(s): KHASAHMADI SH. | GHOLAMI A.

Issue Info:
  • Year: 2016
  • Volume: 42
  • Issue: 3
  • Pages: 513-522
Measures:
  • Citations: 0
  • Views: 752
  • Downloads: 0
Abstract: 

Velocity analysis is one of the most important steps in seismic data processing. It not only affects many processing steps directly and indirectly, but also serves as a primary interpretation of the data. However, it is also one of the most time-consuming processing steps. The conventional velocity analysis method measures the energy amplitude along hyperbolic trajectories within a velocity interval and creates a velocity model. In this procedure, the data are mapped from the time-offset domain to the time-velocity or time-slowness domain. For Nr time, Nh offset, and Nv velocity samples, Nr × Nh × Nv computations are necessary to obtain a velocity model. In the presence of large data and model parameters, computing the velocity spectrum with the conventional method is therefore time consuming. Moreover, to improve the initial velocity model obtained in the processing steps, velocity analysis is usually conducted several times during the processing of the seismic data. Hence, a way to compute the velocity model in much less computation time is needed. In this paper, we introduce the Butterfly algorithm for fast computation of the hyperbolic Radon transform (HRT), a time-variant operator, with an application to seismic velocity analysis. In seismic data processing, Radon transforms map overlapping data in seismic gathers to another domain in which they can be separated. Among the different types of Radon transforms, the HRT is the most similar to seismic events and hence produces the most accurate approximation in the velocity spectrum. However, its time-variant kernel prohibits fast computation, especially for large data. Unlike time-invariant operators, which use the convolution theorem in the Fourier domain to compute the velocity domain for each frequency separately and therefore efficiently, the Fourier transform of a time-variant operator is a function of both frequency and time, so the convolution theorem is not applicable. The Butterfly algorithm can be used as a fast solver for Fourier integral operators (FIOs), so reformulating the HRT integral in the Fourier domain as an FIO makes it possible to use this algorithm to overcome the problem of the time-variant kernel. The basis of this solution is the existence of low-rank approximations of the kernel when it is restricted to subdomains of the data and model spaces. Properly subdividing the model and data domains into smaller subdomains admits low-rank approximations of the kernel. These low-rank approximations yield functions of only one variable, time or frequency, that approximate the kernel; this decoupling of the time and frequency variables allows fast computation of the HRT integral. To perform the subdivision properly, a pair of quad trees, one for each of the data and model domains, is used to restrict the domains in a level-based structure in which the sizes of the data-domain subsets increase while the sizes of the model-domain subsets decrease at each level. The Butterfly algorithm computes the kernel-equivalent functions for each subdomain at each level of these quad trees. Finally, at the last level, the Radon panel or velocity model is obtained. The complexity of this method for two-dimensional data is O(N^2 log N), in which N depends on the range of the data and model variables. As demonstrated in the synthetic and real numerical examples, the O(N^2 log N) complexity reduces the computation time by several orders of magnitude relative to the conventional method.
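
For reference, the conventional velocity analysis that the Butterfly algorithm accelerates can be sketched directly from the description above: amplitudes are stacked along hyperbolic trajectories t(h) = sqrt(t0^2 + h^2/v^2) for every (t0, v) pair, at O(Nt × Nh × Nv) cost. Nearest-sample interpolation and the toy gather are simplifications for illustration.

```python
import numpy as np

def hyperbolic_velocity_spectrum(gather, dt, offsets, velocities):
    """Conventional velocity analysis: stack amplitudes along hyperbolic trajectories
    t(h) = sqrt(t0**2 + (h / v)**2) for every (t0, v) pair, at O(Nt*Nh*Nv) cost.
    Nearest-sample interpolation is used for simplicity."""
    nt, _ = gather.shape
    t0 = np.arange(nt) * dt
    spectrum = np.zeros((nt, len(velocities)))
    for iv, v in enumerate(velocities):
        for ih, h in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (h / v) ** 2)   # hyperbolic moveout
            idx = np.rint(t / dt).astype(int)
            ok = idx < nt                         # drop trajectories leaving the gather
            spectrum[ok, iv] += gather[idx[ok], ih]
    return spectrum

# toy gather: a single reflector at t0 = 0.4 s with velocity 2000 m/s
dt, nt = 0.004, 250
offsets = np.arange(0.0, 1000.0, 50.0)
gather = np.zeros((nt, len(offsets)))
for ih, h in enumerate(offsets):
    gather[int(round(np.sqrt(0.4 ** 2 + (h / 2000.0) ** 2) / dt)), ih] = 1.0
velocities = np.linspace(1500.0, 2500.0, 21)
spec = hyperbolic_velocity_spectrum(gather, dt, offsets, velocities)
print(np.unravel_index(spec.argmax(), spec.shape))  # peak near t0 index 100, v = 2000 m/s
```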

Author(s): RAVALE U.

Issue Info:
  • Year: 2015
  • Volume: 45
  • Issue: -
  • Pages: 428-435
Measures:
  • Citations: 1
  • Views: 142
  • Downloads: 0
Keywords: 
Abstract: 
